Meta-F*: Proof Automation with SMT, Tactics, and Metaprograms
We introduce Meta-F*, a tactics and metaprogramming framework for the F*
program verifier. The main novelty of Meta-F* is that it allows tactics and
metaprogramming to discharge assertions not solvable by SMT, or to simplify
them into well-behaved SMT fragments. Meta-F* can also be used to generate
verified code automatically.
Meta-F* is implemented as an F* effect, which, given F*'s powerful effect
system, greatly increases code reuse and even enables lightweight
verification of metaprograms. Metaprograms can be either interpreted or
compiled to efficient native code that can be dynamically loaded into the F*
type-checker and can interoperate with interpreted code. Evaluation on
realistic case studies shows that Meta-F* provides substantial gains in proof
development, efficiency, and robustness.
Comment: Full version of ESOP'19 paper.
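The workflow the abstract describes can be caricatured in plain Python (this is not F* or Meta-F* code; all names are illustrative): a purely syntactic "backend" rejects a proof obligation, and a user-written "tactic" simplifies it into a form the backend accepts, here by canonicalizing both sides of a polynomial equality in the spirit of ring-normalization tactics.

```python
# Expressions: ('const', n) | ('var',) | ('add', e1, e2) | ('mul', e1, e2)

def canon(e):
    """Tactic: normalize an expression to a {degree: coefficient} map."""
    if e[0] == 'const':
        return {0: e[1]} if e[1] else {}
    if e[0] == 'var':
        return {1: 1}
    a, b = canon(e[1]), canon(e[2])
    if e[0] == 'add':
        out = dict(a)
        for d, c in b.items():
            out[d] = out.get(d, 0) + c
            if out[d] == 0:
                del out[d]
        return out
    # multiplication: convolve the coefficient maps
    out = {}
    for d1, c1 in a.items():
        for d2, c2 in b.items():
            out[d1 + d2] = out.get(d1 + d2, 0) + c1 * c2
    return {d: c for d, c in out.items() if c}

def backend(lhs, rhs):
    """Stand-in for the SMT backend: only accepts syntactically equal goals."""
    return lhs == rhs

def prove_eq(lhs, rhs):
    """Try the backend directly; on failure, run the canon tactic first."""
    return backend(lhs, rhs) or backend(canon(lhs), canon(rhs))

# (x + 1) * (x + 1) vs x*x + 2*x + 1: rejected syntactically,
# discharged after canonicalization by the tactic.
x, one, two = ('var',), ('const', 1), ('const', 2)
lhs = ('mul', ('add', x, one), ('add', x, one))
rhs = ('add', ('mul', x, x), ('add', ('mul', two, x), one))
assert not backend(lhs, rhs)
assert prove_eq(lhs, rhs)
```

The point of the sketch is only the division of labor: the tactic does not replace the solver, it reshapes goals the solver cannot handle into ones it can.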
Foundational Property-Based Testing
Integrating property-based testing with a proof assistant creates an interesting opportunity: reusable or tricky testing code can be formally verified using the proof assistant itself. In this work we introduce a novel methodology for formally verified property-based testing and implement it as a foundational verification framework for QuickChick, a port of QuickCheck to Coq. Our framework enables one to verify that the executable testing code is testing the right Coq property. To make verification tractable, we provide a systematic way of reasoning about the set of outcomes a random data generator can produce with non-zero probability, while abstracting away from the actual probabilities. Our framework is firmly grounded in a fully verified implementation of QuickChick itself, using the same underlying verification methodology. We also apply this methodology to a complex case study on testing an information-flow control abstract machine, demonstrating that our verification methodology is modular and scalable and that it requires minimal changes to existing code.
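The core idea, reasoning about a generator's possible outcomes while abstracting from their probabilities, can be sketched in plain Python (a toy analogue of the paper's Coq development; all names are illustrative):

```python
import random

def gen_sorted_pair(rng):
    """Random generator for pairs (x, y) with x <= y, QuickCheck-style."""
    x = rng.randrange(0, 5)
    return (x, x + rng.randrange(0, 5))

# Declared semantics of the generator: the set of outcomes it can produce
# with non-zero probability. The probabilities themselves are abstracted
# away; only membership in this set is reasoned about.
OUTCOMES = {(x, x + d) for x in range(5) for d in range(5)}

def property_holds(pair):
    return pair[0] <= pair[1]

# Soundness check: every observed output lies in the declared outcome set,
# so tests drawn from this generator exercise the intended domain ...
rng = random.Random(0)
samples = [gen_sorted_pair(rng) for _ in range(1000)]
assert all(s in OUTCOMES for s in samples)
# ... and the declared outcome set is contained in the property's domain.
assert all(property_holds(o) for o in OUTCOMES)
```

In the paper this correspondence between generator and outcome set is a machine-checked theorem rather than a sampled check; the sketch only shows what is being stated.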
Verified Optimizations for Functional Languages
Coq is one of the most widely adopted proof development systems. It allows
programmers to write purely functional programs and verify them
against specifications with machine-checked proofs. After
verification, one can use Coq's extraction plugin to obtain a program
(in OCaml, Haskell, or Scheme) that can be compiled and
executed. However, bugs in either the extraction function or the
compiler of the extraction language can render source-level
verification useless.
A verified compiler is a compiler whose output provably preserves the
semantics of the source language. CertiCoq is a verified compiler,
currently under development, for Coq's specification language,
Gallina. CertiCoq targets Clight, a subset of the C language that
can be compiled with the CompCert verified compiler to obtain a
certified executable, bridging the gap between the formally verified
source program and the compiled target program.
In this thesis, I present the implementation and verification of
CertiCoq's optimizing middle-end pipeline. CertiCoq's middle end
consists of seven different transformations and is responsible for
efficiently compiling an untyped, purely functional intermediate
language to a subset of the same language, which can be readily
compiled to a first-order, low-level intermediate language.
CertiCoq's middle-end pipeline performs crucial optimizations for
functional languages, including closure conversion, uncurrying,
shrink-reduction, and inlining. It advances the state of the art in
verified optimizing compilers for functional languages by implementing
more efficient closure-allocation strategies.
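Closure conversion, one of the passes named above, can be illustrated in plain Python rather than CertiCoq's intermediate language (a minimal sketch; the names are illustrative): a nested function capturing a free variable becomes a closed, first-order code pointer paired with an explicit environment.

```python
# Source-level program: `add x` returns a function that captures x.
def add(x):
    return lambda y: x + y          # x is a free variable of the lambda

# After closure conversion: the code pointer is closed (it receives its
# environment explicitly), and a closure is a (code, environment) pair.
def add_code(env, y):
    return env[0] + y               # env = (x,)

def add_cc(x):
    return (add_code, (x,))         # allocate the closure

def apply_closure(clos, arg):
    code, env = clos
    return code(env, arg)

assert add(3)(4) == apply_closure(add_cc(3), 4) == 7
```

The closure-allocation strategies the thesis discusses concern how and when such environment records are laid out and shared, which this sketch deliberately keeps trivial.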
To prove CertiCoq correct, I develop a framework based on the
technique of logical relations, making novel technical contributions.
I extend logical relations with notions of relational preconditions
and postconditions that facilitate reasoning about the resource
consumption of programs simultaneously with functional correctness.
I demonstrate how this enables reasoning about the preservation of
non-terminating behaviors, which is not supported by traditional
logical relations. Moreover, I develop a novel, lightweight technique
that allows logical-relation proofs to be composed in order to obtain
a top-level compositional compiler correctness theorem. This
technique is used to obtain a separate compilation theorem that
guarantees that programs compiled separately through CertiCoq using
different sets of optimizations can be safely linked at the target
level. Lastly, I use the framework to prove that CertiCoq's closure
conversion is not only functionally correct but also safe for time and
space, meaning that it is guaranteed to preserve the asymptotic time
and space complexity of the source program.
Compositional optimizations for CertiCoq
Compositional compiler verification is a difficult problem that focuses on separate compilation of program components with possibly different verified compilers. Logical relations are widely used in proving correctness of program transformations in higher-order languages; however, they do not scale to compositional verification of multi-pass compilers due to their lack of transitivity. The only known technique to apply to compositional verification of multi-pass compilers for higher-order languages is parametric inter-language simulations (PILS), which is however significantly more complicated than traditional proof techniques for compiler correctness. In this paper, we present a novel verification framework for lightweight compositional compiler correctness. We demonstrate that by imposing the additional restriction that program components are compiled by pipelines that go through the same sequence of intermediate representations, logical relation proofs can be transitively composed in order to derive an end-to-end compositional specification for multi-pass compiler pipelines. Unlike traditional logical-relation frameworks, our framework supports divergence preservation—even when transformations reduce the number of program steps. We achieve this by parameterizing our logical relations with a pair of relational invariants.
We apply this technique to verify a multi-pass, optimizing middle-end pipeline for CertiCoq, a compiler from Gallina (Coq’s specification language) to C. The pipeline optimizes and closure-converts an untyped functional intermediate language (ANF or CPS) to a subset of that language without nested functions, which can be easily code-generated to low-level languages. Notably, our pipeline performs more complex closure-allocation optimizations than the state of the art in verified compilation. Using our novel verification framework, we prove an end-to-end theorem for our pipeline that covers both termination and divergence and applies to whole-program and separate compilation, even when different modules are compiled with different optimizations. Our results are mechanized in the Coq proof assistant.
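The composition idea can be caricatured in plain Python (illustrative names only, not the paper's Coq development): each pass relates source and target under a shared semantics, and because all passes stay in the same intermediate representation, the per-pass relations chain into an end-to-end specification. The caveat, which the sketch cannot show, is that the relation below is trivially transitive, whereas the paper's contribution is making this composition work for logical relations, which in general are not.

```python
def eval_prog(prog, x):
    """Shared semantics: a program is a list of ('add', n) / ('nop',) steps."""
    for op in prog:
        if op[0] == 'add':
            x += op[1]
    return x

def drop_nops(prog):                       # pass 1
    return [op for op in prog if op[0] != 'nop']

def fuse_adds(prog):                       # pass 2: collapse all adds into one
    total = sum(op[1] for op in prog if op[0] == 'add')
    rest = [op for op in prog if op[0] != 'add']
    return rest + [('add', total)]

def related(p, q):
    """Toy stand-in for the logical relation: same observable behavior."""
    return all(eval_prog(p, x) == eval_prog(q, x) for x in range(-3, 4))

src = [('add', 1), ('nop',), ('add', 2)]
mid = drop_nops(src)
tgt = fuse_adds(mid)

# Per-pass correctness ...
assert related(src, mid) and related(mid, tgt)
# ... composes into end-to-end correctness for the pipeline.
assert related(src, tgt)
```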
Field surveys can improve predictions of habitat suitability for reintroductions: a swift fox case study
Reintroductions are challenging, and success rates are low despite extensive planning and considerable investment of resources. Improving predictive models for reintroduction planning is critical for achieving successful outcomes. The IUCN Guidelines for Reintroductions and Other Conservation Translocations recommend that habitat suitability assessments account for abiotic and biotic factors specific to the species to be reintroduced and, where needed, include habitat quality variables. However, habitat assessments are often based on remotely-sensed or existing geographical data that do not always reliably represent habitat quality variables. We tested the contribution of ground-based habitat quality metrics to habitat suitability models using a case study of the swift fox Vulpes velox, a mesocarnivore species for which a reintroduction is planned. Field surveys for habitat quality included collection of data on the main threat to the swift fox (the coyote Canis latrans), and for swift fox prey species. Our findings demonstrated that the inclusion of habitat quality variables derived from field surveys yielded better fitted models and a 16% increase in estimates of suitable habitat. Models including field survey data and models based only on interpolated geographical and remotely-sensed data had little overlap (38%), demonstrating the significant impact that different models can have in determining appropriate locations for a reintroduction. We advocate that ground-based habitat metrics be included in habitat suitability assessments for reintroductions of mesocarnivores.
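The kind of comparison reported above can be sketched with hypothetical data in plain Python (this is not the study's analysis pipeline): given two binary suitability predictions over the same grid, compute each model's suitable area and their percentage overlap.

```python
def suitable_area(grid):
    """Count cells predicted suitable (1) in a binary grid."""
    return sum(cell for row in grid for cell in row)

def overlap_pct(a, b):
    """Shared suitable cells as a percentage of the union of suitable cells."""
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x or y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return 100 * inter / union if union else 0.0

# Hypothetical predictions: 1 = suitable, 0 = unsuitable.
remote_only = [[1, 1, 0], [0, 1, 0], [0, 0, 0]]
with_field  = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]

print(suitable_area(remote_only), suitable_area(with_field))
print(overlap_pct(remote_only, with_field))
```

A real assessment would work with continuous suitability scores, thresholds, and projected cell areas; the sketch only makes the overlap statistic concrete.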
Property-Based Testing via Proof Reconstruction
Property-based testing (PBT) is a technique for validating code against an executable specification by automatically generating test data. We present a proof-theoretical reconstruction of this style of testing for relational specifications and employ the Foundational Proof Certificate framework to describe test generators. We do this by presenting certain kinds of "proof outlines" that can be used to describe various common generation strategies in the PBT literature, ranging from random to exhaustive, including their combination. We also address the shrinking of counterexamples as a first step towards their explanation. Once generation is accomplished, the testing phase boils down to a standard logic programming search. After illustrating our techniques on simple, first-order (algebraic) data structures, we lift them to data structures containing bindings using λ-tree syntax. The λProlog programming language is capable of performing both the generation and checking of tests. We validate this approach by tackling benchmarks in the metatheory of programming languages coming from related tools such as PLT-Redex.
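The two generation strategies the abstract mentions, random and exhaustive, can be sketched in plain Python against the same property (a toy stand-in for the paper's λProlog machinery; names are illustrative):

```python
import itertools
import random

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def exhaustive_lists(max_len, values):
    """Exhaustive strategy: every list of length <= max_len over 'values'."""
    for n in range(max_len + 1):
        yield from (list(t) for t in itertools.product(values, repeat=n))

def random_lists(count, max_len, values, seed=0):
    """Random strategy: 'count' independently sampled lists."""
    rng = random.Random(seed)
    for _ in range(count):
        yield [rng.choice(values) for _ in range(rng.randrange(max_len + 1))]

def pbt(prop, cases):
    """Testing phase: search the generated cases for a counterexample."""
    return next((c for c in cases if not prop(c)), None)

# sorted() satisfies is_sorted under both strategies; the bogus claim
# "reversing preserves sortedness" is refuted by exhaustive generation.
assert pbt(lambda xs: is_sorted(sorted(xs)), exhaustive_lists(3, [0, 1, 2])) is None
assert pbt(lambda xs: is_sorted(sorted(xs)), random_lists(200, 5, range(10))) is None
cex = pbt(lambda xs: is_sorted(list(reversed(sorted(xs)))), exhaustive_lists(3, [0, 1, 2]))
assert cex is not None
```

Combining the strategies, as the paper's proof outlines allow, amounts to chaining the two generators over the same search.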
Aspects of animals in Chinghiz Aitmatov's writings
This thesis is dedicated to the work of Chinghiz Aitmatov, a remarkable Kyrgyz writer of the second half of the 20th century. The thesis consists of four chapters: the first provides information about the author's life and outlines his creative evolution, while the third and fourth chapters analyze some of his writings (the novel "The Scaffold" (1986) and the stories "Farewell, Gulsary!" (1966), "Spotted Dog Running On Seashore" (1977), and "The White Steamboat" (1970)).
The main objective of the thesis is to show that animal images play an important role in the artistic structure of the writings mentioned above, which testifies to the close connection of Aitmatov's work with folklore and mythology.